Outlier explanation is the task of identifying the set of features that distinguishes a sample from normal data, which is important for downstream (human) decision making. Existing methods are based on beam search in the space of feature subsets. They quickly become computationally expensive, as they need to run an outlier detection algorithm from scratch for each feature subset. To alleviate this problem, we propose a novel outlier explanation algorithm based on sum-product networks (SPNs), a class of probabilistic circuits. Our approach leverages the tractability of marginal inference in SPNs to compute outlier scores over feature subsets. By using SPNs, backward elimination becomes feasible instead of the usual forward beam search, and it is less susceptible to missing relevant features in an explanation, especially when the number of features is large. We empirically show that our approach achieves state-of-the-art results for outlier explanation, outperforming recent search-based and deep-learning-based explanation methods.
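The backward-elimination idea can be sketched as follows. This is our own minimal illustration, not the authors' implementation: instead of an SPN, it uses a factorized (diagonal) Gaussian density, for which marginalizing out features is likewise tractable (each dropped feature simply drops its term from the log-likelihood sum); the function names are hypothetical.

```python
import math

def marginal_log_likelihood(x, subset, means, stds):
    # Stand-in for tractable SPN marginal inference: under a factorized
    # (diagonal) Gaussian density, marginalizing out a feature simply drops
    # its term from the log-likelihood sum.
    ll = 0.0
    for i in subset:
        z = (x[i] - means[i]) / stds[i]
        ll += -0.5 * z * z - math.log(stds[i] * math.sqrt(2.0 * math.pi))
    return ll

def backward_elimination(x, means, stds, k):
    # Start from all features and greedily drop the feature whose removal
    # leaves the most anomalous remaining subset (lowest marginal likelihood),
    # until k features remain as the explanation.
    subset = set(range(len(x)))
    while len(subset) > k:
        drop = min(subset,
                   key=lambda i: marginal_log_likelihood(x, subset - {i}, means, stds))
        subset.remove(drop)
    return sorted(subset)
```

Starting from the full feature set, each step keeps the subset under which the sample looks most outlying, which is why backward elimination is less prone to missing relevant features than growing a subset forward.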
We consider generating explanations for a neural network when its training data is not accessible, for instance due to privacy or safety concerns. Recently, $\mathcal{I}$-Nets have been proposed as a sample-free approach to post-hoc, global model interpretability that does not require access to training data. They formulate explanation as a machine learning task that maps network representations (parameters) to a representation of an interpretable function. In this paper, we extend the $\mathcal{I}$-Net framework to the cases of standard and soft decision trees as surrogate models. We propose suitable decision-tree representations and designs for the corresponding $\mathcal{I}$-Net output layers. Furthermore, we make $\mathcal{I}$-Nets applicable to real-world tasks by considering more realistic distributions when generating the $\mathcal{I}$-Net's training data. We empirically evaluate our approach against traditional global, post-hoc interpretability methods and show that it achieves superior results when the training data is not accessible.
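To make the surrogate representation concrete, here is a minimal sketch (our own, not from the paper) of the forward pass of a soft decision tree, i.e., the kind of interpretable function whose parameter vector an $\mathcal{I}$-Net output layer would have to produce: each internal node routes the input left or right with a sigmoid gate, and the prediction is the path-probability-weighted sum of leaf values.

```python
import math

def soft_tree_predict(x, splits, leaves):
    # Soft decision tree of depth d, nodes stored in heap order.
    # splits: (weights, bias) for each of the 2^d - 1 internal nodes.
    # leaves: the 2^d leaf values.
    n_internal = len(splits)
    path = [0.0] * (2 * n_internal + 1)
    path[0] = 1.0  # path probability of the root
    for i, (w, b) in enumerate(splits):
        gate = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        path[2 * i + 1] = path[i] * gate          # left child
        path[2 * i + 2] = path[i] * (1.0 - gate)  # right child
    # leaves occupy heap slots n_internal .. 2 * n_internal
    return sum(p * v for p, v in zip(path[n_internal:], leaves))
```

A standard (hard) decision tree is the limiting case where each gate saturates to 0 or 1, so a single output layer parameterization can cover both surrogate families.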
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
A reliable pose estimator robust to environmental disturbances is desirable for mobile robots. To this end, inertial measurement units (IMUs) play an important role because they can perceive the full motion state of the vehicle independently. However, they suffer from accumulated errors due to inherent noise and bias instability, especially for low-cost sensors. In our previous studies on Wheel-INS \cite{niu2021, wu2021}, we proposed to limit the error drift of the pure inertial navigation system (INS) by mounting an IMU to the wheel of the robot to take advantage of rotation modulation. However, it still drifted over long periods due to the lack of external correction signals. In this letter, we propose to exploit the environmental perception ability of Wheel-INS to achieve simultaneous localization and mapping (SLAM) with only one IMU. Specifically, we use the road bank angles (mirrored by the robot roll angles estimated by Wheel-INS) as terrain features to enable loop closure with a Rao-Blackwellized particle filter. The road bank angle is sampled and stored according to the robot position in the grid maps maintained by the particles. The weights of the particles are updated according to the difference between the currently estimated roll sequence and the terrain map. Field experiments demonstrate the feasibility of performing SLAM in Wheel-INS using the robot roll angle estimates. In addition, the positioning accuracy is improved significantly (by more than 30\%) over Wheel-INS. The source code of our implementation is publicly available (https://github.com/i2Nav-WHU/Wheel-SLAM).
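The particle weight update described above can be sketched roughly as follows. The data layout (a dict per particle holding a cell-indexed bank-angle map, a visited-cell trajectory, and a scalar weight) and the Gaussian mismatch likelihood are our own illustrative assumptions, not the authors' implementation.

```python
import math

def update_particle_weights(particles, roll_sequence, sigma=0.5):
    # Compare the currently estimated roll sequence against the bank angles
    # previously stored in each particle's grid map; reward particles whose
    # map agrees with the new observations (a revisit / loop closure).
    for p in particles:
        cells = p["trajectory"][-len(roll_sequence):]
        error, n = 0.0, 0
        for cell, roll in zip(cells, roll_sequence):
            stored = p["map"].get(cell)
            if stored is not None:  # cell revisited: compare against the map
                error += (stored - roll) ** 2
                n += 1
        if n > 0:
            # Gaussian likelihood of the roll-sequence mismatch
            p["weight"] *= math.exp(-error / (2.0 * sigma ** 2 * n))
    total = sum(p["weight"] for p in particles)
    for p in particles:
        p["weight"] /= total
    return particles
```

Particles whose pose hypothesis places the robot over terrain consistent with the stored bank angles keep high weight; the rest are down-weighted and eventually resampled away.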
Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on how to turn it into one that can be productively studied empirically. We first present an experimental design centered on choosing tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat -- a trivial baseline strategy for scalable oversight -- substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models, and they bolster recent findings that large language models can productively assist humans with difficult tasks.
Important problems in causal inference, economics, and, more generally, machine learning can be expressed as conditional moment restrictions, but estimation becomes challenging as it requires solving a continuum of unconditional moment restrictions. Previous works address this problem by extending the generalized method of moments (GMM) to continuous moment restrictions. In contrast, generalized empirical likelihood (GEL) provides a more general framework and has been shown to enjoy favorable small-sample properties compared to GMM-based estimators. To benefit from recent developments in machine learning, we provide a functional reformulation of GEL in which arbitrary models can be leveraged. Motivated by a dual formulation of the resulting infinite-dimensional optimization problem, we devise a practical method and explore its asymptotic properties. Finally, we provide kernel- and neural-network-based implementations of the estimator, which achieve state-of-the-art empirical performance on two conditional moment restriction problems.
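The reduction the abstract alludes to can be stated in standard notation (ours, not reproduced from the paper): for a moment function $\psi$, data $Z$, instrument $X$, and parameter $\theta$, the conditional moment restriction

```latex
\mathbb{E}\left[\psi(Z;\theta) \mid X\right] = 0 \quad \text{almost surely}
```

holds if and only if the continuum of unconditional restrictions

```latex
\mathbb{E}\left[\psi(Z;\theta)\, h(X)\right] = 0 \quad \text{for all bounded measurable test functions } h
```

holds, which is why estimation must handle infinitely many moment conditions and why the resulting optimization problem is infinite-dimensional.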
Multiple lines of evidence strongly suggest that infection hotspots, where a single individual infects many others, play a key role in the transmission dynamics of COVID-19. However, most existing epidemiological models fail to capture this aspect, as they neither explicitly represent the sites visited by individuals nor characterize disease transmission as a function of individual mobility patterns. In this work, we introduce a temporal point process modeling framework that specifically represents visits to the sites where individuals come into contact and infect each other. Under our model, the number of infections caused by an infectious individual naturally emerges to be overdispersed. Using an efficient sampling algorithm, we demonstrate how to estimate the transmission rate of infectious individuals at the sites they visit and in their households using Bayesian optimization and longitudinal case data. Simulations using fine-grained, publicly available demographic data and site locations from Bern, Switzerland showcase the flexibility of our framework. To facilitate research and analyses of other cities and regions, we release an open-source implementation of our framework.
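Why overdispersion "naturally emerges" from site-based transmission can be seen in a toy simulation (our own illustration, far simpler than the paper's point process model): when a few sites have much higher transmission rates than the rest, the offspring counts of infectious individuals become overdispersed even though contacts at each site are plain Poisson.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's inversion method for Poisson sampling (fine for modest rates).
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def secondary_infections(n_infectious, site_rates, visits_per_person, rng):
    # Each infectious individual visits a few sites chosen uniformly at random;
    # the number of contacts infected at a site is Poisson with that site's
    # rate. A few high-rate "hotspot" sites concentrate transmission, so the
    # per-individual offspring counts come out overdispersed.
    counts = []
    for _ in range(n_infectious):
        total = 0
        for _ in range(visits_per_person):
            total += poisson(rng.choice(site_rates), rng)
        counts.append(total)
    return counts
```

With homogeneous site rates the counts would be approximately Poisson (variance near the mean); mixing in a handful of hotspot sites pushes the variance far above the mean, matching the superspreading pattern observed for COVID-19.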